Optimization of Fuzzy System Inference Model on Mini Batch Gradient Descent

Abstract

Optimization is one of the factors in machine learning that helps model training during backpropagation. It is carried out by adjusting the weights to minimize the loss function and to overcome dimensionality problems. Gradient descent is a simple backpropagation approach that solves for the minimum of the loss function, and its mini-batch variant (MBGD) has proven to be powerful for large-scale learning. In addition, several extensions of MBGD, such as AdaBound (AB), batch normalization (BN), and uniform regularization (UR), can accelerate the convergence process, so the algorithm becomes faster and more effective. The added components perform an optimization process on the processed rule data through the objective function. The results showed that MBGD-AB-BN-UR had more stable computational times on three data sets than the other methods. For evaluation, this research used RMSE, MAE, and MAPE.
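As a rough illustration of the baseline method, here is a minimal mini-batch gradient descent loop on a synthetic linear least-squares problem, evaluated with the three metrics the paper uses (RMSE, MAE, MAPE). The model, data, and hyperparameters are illustrative assumptions; the paper applies MBGD to a fuzzy inference system, not a linear model.

```python
# A minimal sketch of plain mini-batch gradient descent (MBGD); the linear
# model and synthetic data are assumptions, not the paper's fuzzy system.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # synthetic features
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)   # synthetic targets

w = np.zeros(5)
lr, batch_size, epochs = 0.05, 32, 50
n = len(X)
for _ in range(epochs):
    idx = rng.permutation(n)                   # reshuffle each epoch
    for start in range(0, n, batch_size):
        b = idx[start:start + batch_size]
        err = X[b] @ w - y[b]                  # residuals on the mini-batch
        grad = X[b].T @ err / len(b)           # gradient of 0.5 * mean squared error
        w -= lr * grad                         # descent step

pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
mae = np.mean(np.abs(pred - y))
mape = np.mean(np.abs((pred - y) / y)) * 100   # assumes no zero targets
print(f"RMSE={rmse:.4f}  MAE={mae:.4f}  MAPE={mape:.2f}%")
```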

Similar Articles

Fully Distributed Privacy Preserving Mini-batch Gradient Descent Learning

In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a light-weight protocol to quickly and securely compute the sum of the inputs of a subset of participants assuming a semi-honest ad...
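A common light-weight primitive in this setting is a masked secure sum: each pair of participants shares a random mask that one adds and the other subtracts, so the masks cancel in the aggregate while no individual input is revealed. The sketch below shows that idea under a semi-honest assumption; the pairwise-masking scheme is an illustration, not the paper's protocol.

```python
# A minimal sketch of a masked secure sum among semi-honest participants.
import random

def secure_sum(inputs, modulus=2**32):
    n = len(inputs)
    shares = list(inputs)
    for i in range(n):
        for j in range(i + 1, n):
            mask = random.randrange(modulus)
            shares[i] = (shares[i] + mask) % modulus   # i adds the pairwise mask
            shares[j] = (shares[j] - mask) % modulus   # j subtracts the same mask
    # each masked share reveals nothing on its own; their sum is the true sum
    return sum(shares) % modulus

print(secure_sum([3, 10, 5]))  # -> 18
```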

A Resizable Mini-batch Gradient Descent based on a Multi-Armed Bandit

Determining the appropriate batch size for mini-batch gradient descent is always time-consuming as it often relies on grid search. This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multi-armed bandit for achieving best performance in grid search by selecting an appropriate batch size at each epoch with a probability defined as a function of its previous su...
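To make the bandit framing concrete, the sketch below treats each candidate batch size as an arm, picks one per epoch, and rewards arms whose epoch lowered the loss. The epsilon-greedy rule and the run_epoch placeholder are assumptions of this sketch; RMGD's actual probability update differs.

```python
# A minimal sketch of bandit-driven batch-size selection (epsilon-greedy
# stands in for RMGD's probability scheme; run_epoch is a placeholder).
import random

arms = [16, 32, 64, 128]          # candidate batch sizes
value = {b: 0.0 for b in arms}    # running reward estimate per arm
count = {b: 0 for b in arms}
eps, prev_loss = 0.1, float("inf")

def run_epoch(batch_size):
    # placeholder: train one epoch with this batch size and return the loss
    return random.random() / batch_size ** 0.5

for epoch in range(100):
    if random.random() < eps:
        b = random.choice(arms)                # explore a random arm
    else:
        b = max(arms, key=lambda a: value[a])  # exploit the best arm so far
    loss = run_epoch(b)
    reward = 1.0 if loss < prev_loss else 0.0  # did this epoch improve?
    count[b] += 1
    value[b] += (reward - value[b]) / count[b] # incremental mean update
    prev_loss = loss

print("preferred batch size:", max(arms, key=lambda a: value[a]))
```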

On Scalable Inference with Stochastic Gradient Descent

In many applications involving large dataset or online updating, stochastic gradient descent (SGD) provides a scalable way to compute parameter estimates and has gained increasing popularity due to its numerical convenience and memory efficiency. While the asymptotic properties of SGD-based estimators have been established decades ago, statistical inference such as interval estimation remains m...
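One common route to interval estimates in this setting combines Polyak-Ruppert iterate averaging with a variance proxy computed from the SGD trajectory. The sketch below does this for a toy one-dimensional mean-estimation problem using batch means; the interval construction is an illustrative stand-in, not the estimator developed in the paper.

```python
# A minimal sketch: SGD with Polyak-Ruppert averaging plus a naive
# batch-means confidence interval (an assumption of this illustration).
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=10000)

theta, avg, iterates = 0.0, 0.0, []
for t, x in enumerate(data, start=1):
    lr = 1.0 / t ** 0.6                 # slowly decaying step size
    theta -= lr * (theta - x)           # SGD step on 0.5 * (theta - x)^2
    avg += (theta - avg) / t            # running Polyak-Ruppert average
    iterates.append(theta)

# batch-means variance proxy: split the trajectory into blocks
blocks = np.array_split(np.array(iterates), 20)
means = np.array([b.mean() for b in blocks])
se = means.std(ddof=1) / np.sqrt(len(means))
print(f"estimate={avg:.3f}, 95% CI ~ ({avg - 1.96*se:.3f}, {avg + 1.96*se:.3f})")
```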

mS2GD: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting

We propose a mini-batching scheme for improving the theoretical complexity and practical performance of semi-stochastic gradient descent applied to the problem of minimizing a strongly convex composite function represented as the sum of an average of a large number of smooth convex functions, and a simple nonsmooth convex function. Our method first performs a deterministic step (computation of th...
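The general pattern of such semi-stochastic methods is an outer loop that computes a full gradient at a reference point and inner iterations that take variance-reduced mini-batch steps followed by a proximal step for the nonsmooth term. The sketch below instantiates this for an assumed lasso-style objective; the data, step size, batch size, and penalty are illustrative, not the paper's settings.

```python
# A minimal sketch of the mS2GD pattern: full gradient at a reference point,
# variance-reduced mini-batch inner steps, then a proximal (soft-threshold) step.
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.05 * rng.normal(size=n)
lam, lr, batch = 0.01, 0.01, 16          # illustrative constants

def grad(w, idx):                        # mini-batch gradient of the smooth part
    return A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)

def prox_l1(w, step):                    # proximal operator of step * lam * ||w||_1
    return np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

w = np.zeros(d)
for outer in range(20):
    w_ref = w.copy()
    full_grad = A.T @ (A @ w_ref - b) / n                 # deterministic step
    for inner in range(n // batch):
        idx = rng.choice(n, size=batch, replace=False)
        v = grad(w, idx) - grad(w_ref, idx) + full_grad   # variance-reduced direction
        w = prox_l1(w - lr * v, lr)                       # proximal step

print("objective:", 0.5 * np.mean((A @ w - b) ** 2) + lam * np.abs(w).sum())
```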

Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme under Weak Strong Convexity Assumption

We propose a projected semi-stochastic gradient descent method with mini-batch for improving both the theoretical complexity and practical performance of the general stochastic gradient descent method (SGD). We are able to prove linear convergence under a weak strong convexity assumption. This requires no strong convexity assumption for minimizing the sum of smooth convex functions subject to a c...
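Relative to the proximal variant above, the distinguishing ingredient here is a Euclidean projection back onto the feasible set after each variance-reduced step. A minimal sketch of that projection, assuming an L2-ball constraint (the constraint set and radius are illustrative, not from the paper):

```python
# A minimal sketch of the projection step; the inner loop is otherwise the
# same as the mS2GD sketch above, with the proximal step replaced as noted.
import numpy as np

def project_l2_ball(w, radius=1.0):
    """Euclidean projection of w onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

# inside the inner loop, the only change versus the proximal version is:
#   w = project_l2_ball(w - lr * v, radius)
print(project_l2_ball(np.array([3.0, 4.0])))   # -> [0.6, 0.8]
```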

Journal

Journal title: Frontiers in Artificial Intelligence and Applications

Year: 2022

ISSN: 1879-8314, 0922-6389

DOI: https://doi.org/10.3233/faia220387